TECHnalysis Research Blog

February 1, 2024
How Will GenAI Impact Our Devices?

By Bob O'Donnell

Now that we’re into year 2 of the Generative AI phenomenon, it’s time to start digging into the second- and third-order implications of what this technology revolution may actually bring about.

One question that’s evolved quite a bit over the past year is the potential impact that GenAI will have on things like PCs, smartphones, tablets, wearables, and other devices.

Initially, it seemed like these various computing gadgets wouldn’t be much of a factor, as all the early GenAI applications and services were done in the cloud and simply accessed like any other website or application. The presumption was that the computing requirements for doing these kinds of workloads were way beyond the reach of even the most powerful personal computing systems and could only be handled by mega datacenters in the cloud.

That viewpoint started to shift throughout last year, however, thanks to a variety of developments and the remarkable speed at which advancements in the GenAI world were occurring. Notably, we saw the introduction of multiple foundation models with fewer than 10 billion parameters that could not only fit within the memory and compute limits of our various personal devices but were actually designed to run on them.
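For a sense of why models in the sub-10-billion-parameter range are the ones that fit on personal devices, here’s a rough back-of-the-envelope sketch in Python. The numbers are illustrative approximations only: they cover weight storage at a few common precisions and ignore activations, the KV cache, and everything else a running application needs.

```python
# Rough back-of-the-envelope estimate of how much memory a model's weights
# need at different numeric precisions. Figures are illustrative only; real
# deployments also need memory for activations, the KV cache, and the app itself.

BYTES_PER_PARAM = {
    "fp32": 4.0,   # full precision
    "fp16": 2.0,   # half precision, common for GPU/NPU inference
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization, typical for on-device builds
}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate weight storage in gigabytes for a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# Compare an on-device-class 7B model with a cloud-class 70B model.
for params in (7e9, 70e9):
    for precision in ("fp16", "int4"):
        print(f"{params / 1e9:.0f}B parameters @ {precision}: "
              f"~{weight_memory_gb(params, precision):.1f} GB of weights")
```

By this crude math, a 7-billion-parameter model squeezed to 4-bit weights needs on the order of 3.5 GB, which is plausible on a premium phone or laptop, while a 70-billion-parameter model remains datacenter territory.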

Versions of Meta’s Llama 2, Google’s Gemini, Stability AI’s Stable Diffusion, and more have all been demonstrated running on PCs and/or phones, and there’s been even more speculation that companies like Microsoft, Apple, and many others are working to make this an everyday reality in 2024.

In addition, technology advancements like model quantization and pruning, as well as implementation concepts like RAG (retrieval augmented generation), are bringing enormous focus to the idea of running foundation models and GenAI applications on the device. In fact, it has arguably reached the point where on-device AI is pretty much accepted as a given.
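To make the RAG idea concrete, here’s a minimal, hypothetical sketch. The keyword-overlap retriever and the run_local_model function are stand-ins of my own (real systems typically use embedding models, a vector index, and an actual on-device inference runtime), but the flow of pulling in local context and folding it into the prompt is the essence of the technique.

```python
# Minimal sketch of retrieval augmented generation (RAG) running locally.
# The keyword-overlap retriever and run_local_model() are illustrative
# stand-ins; real systems typically use embedding models, a vector index,
# and an actual on-device inference runtime.

def run_local_model(prompt: str) -> str:
    # Placeholder for invoking a quantized local model through an
    # on-device runtime (an assumption, not any specific product's API).
    return "<model output>"

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents that share the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Ground the model's answer in locally retrieved context."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return run_local_model(prompt)

# Example: personal data stays on the device while still informing the answer.
docs = [
    "Flight confirmation: Austin, departing March 4 at 9:10 am.",
    "Tuesday meeting notes covering the Q3 budget review.",
]
print(answer_with_rag("When is my flight to Austin?", docs))
```

Part of the appeal for devices is exactly what the example implies: the documents being retrieved can be personal data that never has to leave the phone or PC.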

At Samsung’s recent Galaxy S24 launch event (see “Samsung Focuses Galaxy S24 Upgrades on Software”), for example, we saw some of the first real-world implementations of on-device AI in Samsung’s Galaxy AI features, such as real-time translation. I expect to see many more examples of shipping products (as well as innumerable in-development demonstrations) throughout this year and into next.

On top of these software innovations, multiple device-oriented AI hardware acceleration capabilities have appeared in advanced new semiconductors. Qualcomm got things started earlier last year with demos of GenAI models running on its Snapdragon 8 Gen 3 line of mobile phone processors. The company continued that theme with the announcement of its Snapdragon X Elite PC processor slated for release in the middle of this year (see “Qualcomm’s Snapdragon X Elite Solidifies New Era of AI PCs” for more). AMD also debuted the Ryzen 7040 PC SOC last year with its first Ryzen AI NPU and brought out the updated 8040 at the end of last year (see “AMD Makes Definitive GenAI Statement” for more). Finally, Intel closed out 2023 with the launch of its long-awaited Core Ultra SOC, which is its first to incorporate a dedicated AI accelerator (see “Intel Refines Its Computing Vision” for more).

Even more impressive than this flurry of activity on the semiconductor side is the accelerated pace at which new launches are expected in the coming months. Both AMD and Intel, for example, are expected to have even more powerful PC-focused chips with better AI accelerators before the end of the year. For its part, Microsoft has also hinted at several new capabilities coming to Windows PCs that will leverage these new chips. Plus, though it has been remarkably silent on the topic to date, Apple is anticipated to have GenAI-related news on both the chip and software/OS/application fronts around the time of its next WWDC event, which typically happens in early June.

Taken together, all these current and expected advances already reflect a profound impact on the device world. And yet, I’d argue that they only represent the tip of the iceberg. First, on the overall market front, there are some encouraging early signs that AI-powered PCs and smartphones will reinvigorate the sales of these recently floundering categories. The biggest impact likely won’t come until the second half of 2024 and perhaps not until 2025, but after a difficult last year or two, that’s still great news.

Even more importantly, we can likely expect some dramatic improvements in the overall usability and capability of our devices because of GenAI. Adding things like truly usable and reliable speech and gesture inputs can open a whole range of new applications and dramatically reduce the frustrations that many people have with their current devices. It will also enable entirely new types of devices, particularly in the wearables world, where the dependence on screens for a UI will start to diminish. Across all of our devices, this will require things like higher quality microphones, more and better sensors, and new and easier ways of connecting with peripherals and other devices.

I also expect we’ll see new types of software architectures, such as distributed applications that perform some of their work in the cloud and some on-device. In fact, I believe this Hybrid AI concept will prove to be one of the primary means of running GenAI applications on devices for the next several years, particularly until there’s a larger installed base of devices with powerful, dedicated AI co-processors.
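As an illustration of what such a hybrid split might look like in practice, here’s a deliberately simple sketch. The routing rule and both model calls are assumptions made purely for illustration, not any vendor’s actual architecture; real routers weigh latency, cost, battery, connectivity, and privacy in far more sophisticated ways.

```python
# Minimal sketch of the "Hybrid AI" idea: keep small, latency-sensitive, or
# privacy-sensitive requests on the device and send heavier work to the cloud.
# Both model calls and the routing rule are illustrative assumptions, not any
# vendor's actual implementation.

def run_on_device(prompt: str) -> str:
    return "<small local model output>"   # placeholder for an NPU-accelerated local model

def run_in_cloud(prompt: str) -> str:
    return "<large cloud model output>"   # placeholder for a hosted foundation model

def hybrid_generate(prompt: str, contains_private_data: bool = False) -> str:
    """Very simple router: keep private or short requests local, send the rest to the cloud."""
    if contains_private_data or len(prompt.split()) < 50:
        return run_on_device(prompt)
    return run_in_cloud(prompt)

# A request touching personal data stays local; a long research query would go to the cloud.
print(hybrid_generate("Summarize my last three text messages", contains_private_data=True))
```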

Of course, eventually, that is just what we’ll have. While we may initially call these new devices AI PCs or AI smartphones, they will soon just be PCs and smartphones once again, and the AI capabilities will be inherent and assumed. It’s very much like how graphics and GPUs came into being. Initially no PCs (or smartphones) had dedicated graphics chips, and it was a big deal when the first GPUs started to be incorporated into them. Now, every device has some level of integrated graphics acceleration and a few—like gaming PCs—still have standalone dedicated GPUs for more demanding needs. I believe almost the exact same thing will happen with NPUs and AI acceleration. Most every device will have some level of AI acceleration within about two to three years, but there will continue to be some that use dedicated AI processors for more advanced applications.

In the meantime, though, we’ll need to figure out how we think about, talk about, and categorize these new types of GenAI-influenced computing devices. There’s little doubt that things may get confusing for a while, but it’s also clear that we are headed into some interesting and exciting times in the world of PCs, smartphones, tablets, and wearables.

Here's a link to the original column: https://www.linkedin.com/pulse/how-genai-impact-our-devices-bob-o-donnell-bqlic/?trackingId=Pnc9laZ1TTiP3exc%2FIbMpA%3D%3D

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.